End-to-end learning for visual robotic manipulation is known to suffer from sample inefficiency, requiring large numbers of demonstrations. The spatial roto-translation equivariance, i.e., SE(3)-equivariance, can be exploited to improve the sample efficiency of learning robotic manipulation. In this paper, we present fully end-to-end SE(3)-equivariant models for visual robotic manipulation from point cloud input. By utilizing the representation theory of the Lie group, we construct novel SE(3)-equivariant energy-based models that allow highly sample-efficient end-to-end learning. We show that our models can be trained from scratch without prior knowledge, yet are highly sample efficient (roughly 10 demonstrations are enough). Furthermore, we show that the trained models generalize to (i) previously unseen target object poses, (ii) previously unseen target object instances of the category, and (iii) previously unseen visual distractors. We experiment with 6-DoF robotic manipulation tasks to validate our models' sample efficiency and generalizability. Code is available at: https://github.com/tomato1mule/edf
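The load-bearing property here is that the pose energy must be unchanged when the same rigid transform acts jointly on the point cloud and the gripper pose. The toy sketch below illustrates that invariance with a hand-written energy on gripper-frame coordinates; it is an illustrative stand-in under that one assumption, not the paper's Equivariant Descriptor Fields model.

```python
# Toy SE(3)-invariant pose energy: NOT the paper's model, just a demonstration
# of the invariance property an SE(3)-equivariant energy-based model satisfies.
import numpy as np
from scipy.spatial.transform import Rotation

def energy(cloud: np.ndarray, R: np.ndarray, t: np.ndarray) -> float:
    """Energy of a gripper pose (R, t) given an (N, 3) point cloud.

    Points are expressed in the gripper frame; those coordinates are unchanged
    when one rigid transform acts on both the cloud and the pose, so the
    energy is SE(3)-invariant by construction.
    """
    local = (cloud - t) @ R                       # cloud in gripper coordinates
    weights = np.exp(-np.sum(local ** 2, -1))     # soft neighborhood of gripper
    return float(-(weights * local[:, 2]).sum())  # favor mass along gripper +z

rng = np.random.default_rng(0)
cloud = rng.normal(size=(128, 3))
R, t = Rotation.random(random_state=1).as_matrix(), rng.normal(size=3)

g_R = Rotation.random(random_state=2).as_matrix()  # arbitrary rigid transform g
g_t = rng.normal(size=3)
cloud_g = cloud @ g_R.T + g_t                      # g acting on the cloud
R_g, t_g = g_R @ R, g_R @ t + g_t                  # g acting on the pose

assert np.isclose(energy(cloud, R, t), energy(cloud_g, R_g, t_g))
```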
We present SODA: the first publicly available, million-scale high-quality social dialogue dataset. Using SODA, we train COSMO: a generalizable conversation agent outperforming previous best-performing agents on both in- and out-of-domain datasets. In contrast to most existing crowdsourced, small-scale dialogue corpora, we distill 1.5M socially-grounded dialogues from a pre-trained language model (InstructGPT; Ouyang et al., 2022). Dialogues are distilled by contextualizing social commonsense knowledge from a knowledge graph (Atomic10x; West et al., 2022). Human evaluation shows that dialogues in SODA are more consistent, specific, and (surprisingly) natural than prior human-authored datasets - e.g., DailyDialog (Li et al., 2017), BlendedSkillTalk (Smith et al., 2020). In addition, extensive evaluations show that COSMO is significantly more natural and consistent on unseen datasets than best-performing dialogue models - e.g., GODEL (Peng et al., 2022), BlenderBot (Roller et al., 2021), DialoGPT (Zhang et al., 2020). Furthermore, it is sometimes even preferred to the original human-written gold responses. We make our data, models, and code public.
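A minimal sketch of the distillation recipe described above, with heavy assumptions: the triple, the relation templates, and the speaker naming are all illustrative, and `call_llm` is a placeholder for whatever completion API is used (SODA used InstructGPT).

```python
# Hedged sketch: contextualize a commonsense triple into a narrative + dialogue
# prompt for an LLM. Templates and the example triple are illustrative only.
TRIPLE = ("PersonX moves a step closer to the goal",
          "xNeed",
          "to take the first step")

def triple_to_sentence(head: str, relation: str, tail: str) -> str:
    # minimal template table for two Atomic-style relations (assumed forms)
    templates = {
        "xNeed": "{head}. Before, PersonX needed {tail}.",
        "xWant": "{head}. After, PersonX wants {tail}.",
    }
    return templates[relation].format(head=head, tail=tail)

def build_prompt(head: str, relation: str, tail: str) -> str:
    # anchor the abstract triple to a concrete speaker, then ask for a dialogue
    sentence = triple_to_sentence(head, relation, tail).replace("PersonX", "Alex")
    return (f"Narrative: {sentence}\n"
            "Write a long, natural conversation between Alex and a friend, "
            "grounded in the narrative above.\nConversation:\n")

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM completion API here")

print(build_prompt(*TRIPLE))
```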
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practice and the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on either multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
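For concreteness, here is a small sketch of two of the surveyed practices, k-fold cross-validation on the training set followed by ensembling of the fold models, using scikit-learn stand-ins rather than a biomedical imaging pipeline:

```python
# Illustrative k-fold cross-validation + fold-model ensembling; the classifier
# and synthetic data are stand-ins for a real (e.g., segmentation) pipeline.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold

X, y = make_classification(n_samples=400, random_state=0)
X_train, y_train, X_test, y_test = X[:300], y[:300], X[300:], y[300:]

models = []
kf = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (tr, va) in enumerate(kf.split(X_train)):
    model = RandomForestClassifier(random_state=fold).fit(X_train[tr], y_train[tr])
    print(f"fold {fold} val acc: {model.score(X_train[va], y_train[va]):.3f}")
    models.append(model)

# ensemble the five fold models by averaging their predicted probabilities
proba = np.mean([m.predict_proba(X_test) for m in models], axis=0)
print("ensemble test acc:", (proba.argmax(axis=1) == y_test).mean())
```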
Patch-based models, e.g., Vision Transformers (ViTs) and Mixers, have shown impressive results on various visual recognition tasks, emerging as alternatives to classic convolutional networks. While the initial patch-based models (ViTs) treated all patches equally, recent studies reveal that incorporating inductive bias like spatiality benefits the representations. However, most prior works solely focused on the location of patches, overlooking the scene structure of images. Thus, we aim to further guide the interaction of patches using object information. Specifically, we propose OAMixer (object-aware mixing layer), which calibrates the patch mixing layers of patch-based models based on object labels. Here, we obtain the object labels in unsupervised or weakly-supervised manners, i.e., no additional human annotation cost is necessary. Using the object labels, OAMixer computes a reweighting mask with a learnable scale parameter that intensifies the interaction of patches containing similar objects, and applies the mask to the patch mixing layers. By learning an object-centric representation, we demonstrate that OAMixer improves the classification accuracy and background robustness of various patch-based models, including ViTs, MLP-Mixers, and ConvMixers. Moreover, we show that OAMixer enhances various downstream tasks, including large-scale classification, self-supervised learning, and multi-object recognition, verifying the generic applicability of OAMixer.
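A hedged sketch of the core mechanism, assuming a simple exponential parameterization (the paper's exact form may differ): an (N, N) mask derived from per-patch object labels, scaled by a learnable parameter, multiplies the patch-mixing weights so that same-object patches interact more strongly.

```python
# Illustrative object-aware reweighting of patch-mixing weights; the mask
# parameterization is an assumption, not necessarily OAMixer's exact form.
import torch

def object_reweight_mask(labels: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """labels: (N,) integer object label per patch; scale: learnable scalar."""
    same = (labels[:, None] == labels[None, :]).float()  # (N, N) same-object mask
    return torch.exp(scale * same)                       # >1 for same-object pairs

N, D = 16, 32
labels = torch.randint(0, 3, (N,))               # e.g., from unsupervised masks
scale = torch.nn.Parameter(torch.tensor(1.0))
attn = torch.softmax(torch.randn(N, N), dim=-1)  # any patch-mixing weights

mixed = object_reweight_mask(labels, scale) * attn  # calibrate the mixing
mixed = mixed / mixed.sum(dim=-1, keepdim=True)     # re-normalize each row
tokens = mixed @ torch.randn(N, D)                  # apply to patch tokens
```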
Question Answering (QA) is a task that entails reasoning over natural language contexts, and many relevant works augment language models (LMs) with graph neural networks (GNNs) to encode the Knowledge Graph (KG) information. However, most existing GNN-based modules for QA do not take advantage of rich relational information of KGs and depend on limited information interaction between the LM and the KG. To address these issues, we propose Question Answering Transformer (QAT), which is designed to jointly reason over language and graphs with respect to entity relations in a unified manner. Specifically, QAT constructs Meta-Path tokens, which learn relation-centric embeddings based on diverse structural and semantic relations. Then, our Relation-Aware Self-Attention module comprehensively integrates different modalities via the Cross-Modal Relative Position Bias, which guides information exchange between relevant entities of different modalities. We validate the effectiveness of QAT on commonsense question answering datasets like CommonsenseQA and OpenBookQA, and on a medical question answering dataset, MedQA-USMLE. On all the datasets, our method achieves state-of-the-art performance. Our code is available at http://github.com/mlvlab/QAT.
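A rough sketch of the relative-position-bias mechanism in isolation: a learned scalar bias, indexed by an assumed relation type for each token pair (e.g., LM token vs. KG entity, or a Meta-Path relation), is added to the attention logits. This is a minimal stand-in, not QAT's full Relation-Aware Self-Attention module.

```python
# Minimal relation-indexed attention bias; relation typing here is assumed.
import torch

N, D, n_rel = 8, 32, 4
x = torch.randn(N, D)                   # mixed LM-token / KG-entity embeddings
rel = torch.randint(0, n_rel, (N, N))   # relation type for each token pair
bias_table = torch.nn.Parameter(torch.zeros(n_rel))  # learned bias per relation

q, k, v = (torch.nn.Linear(D, D)(x) for _ in range(3))
logits = q @ k.T / D ** 0.5 + bias_table[rel]  # add relation-indexed bias
out = torch.softmax(logits, dim=-1) @ v        # bias steers cross-modal exchange
```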
While 3D GANs have recently demonstrated the high-quality synthesis of multi-view consistent images and 3D shapes, they are mainly restricted to photo-realistic human portraits. This paper aims to extend 3D GANs to a different, but meaningful visual form: artistic portrait drawings. However, extending existing 3D GANs to drawings is challenging due to the inevitable geometric ambiguity present in drawings. To tackle this, we present Dr.3D, a novel adaptation approach that adapts an existing 3D GAN to artistic drawings. Dr.3D is equipped with three novel components to handle the geometric ambiguity: a deformation-aware 3D synthesis network, an alternating adaptation of pose estimation and image synthesis, and geometric priors. Experiments show that our approach can successfully adapt 3D GANs to drawings and enable multi-view consistent semantic editing of drawings.
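The alternating adaptation can be pictured as a schedule that fine-tunes one module while freezing the other. The toy loop below, with stand-in linear modules and a reconstruction loss in place of the GAN objective, is only meant to show the scheduling pattern, not Dr.3D's actual training procedure.

```python
# Toy alternating-adaptation schedule: freeze one module, train the other,
# switch periodically. Models and loss are placeholders for the real pipeline.
import torch

pose_net = torch.nn.Linear(8, 3)    # stand-in pose estimator
synth_net = torch.nn.Linear(3, 8)   # stand-in synthesis network
opt = torch.optim.Adam(
    list(pose_net.parameters()) + list(synth_net.parameters()), lr=1e-3)
images = torch.randn(64, 8)         # stand-in "drawings"

for step in range(100):
    train_pose = (step // 10) % 2 == 0           # switch module every 10 steps
    for p in pose_net.parameters():
        p.requires_grad_(train_pose)
    for p in synth_net.parameters():
        p.requires_grad_(not train_pose)
    recon = synth_net(pose_net(images))          # estimate pose, re-synthesize
    loss = (recon - images).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()                                   # frozen params receive no grad
```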
This paper proposes Mutual Information Regularized Assignment (MIRA), a pseudo-labeling algorithm for unsupervised representation learning inspired by information maximization. We formulate online pseudo-labeling as an optimization problem to find pseudo-labels that maximize the mutual information between the label and data while being close to a given model probability. We derive a fixed-point iteration method and prove its convergence to the optimal solution. In contrast to baselines, MIRA combined with pseudo-label prediction enables simple yet effective clustering-based representation learning without extra training techniques or artificial constraints such as sampling strategies or equipartition constraints. With relatively few training epochs, representations learned by MIRA achieve state-of-the-art performance on various downstream tasks, including linear/k-NN evaluation and transfer learning. In particular, with only 400 epochs, our method applied to the ImageNet dataset with a ResNet-50 architecture achieves 75.6% linear evaluation accuracy.
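As a crude, clearly hypothetical stand-in for the derived update (the paper's exact objective and convergence proof are in the text; the `lam` exponent and the update form below are assumptions), one can iterate between rescaling by the current cluster marginal, which keeps labels close to the model probabilities while spreading mass across clusters, and renormalizing:

```python
# Toy fixed-point iteration for marginal-balanced pseudo-labels; NOT MIRA's
# actual update, only an illustration of the "stay near P, balance the
# marginal" trade-off that mutual-information regularization induces.
import numpy as np

def pseudo_labels(P: np.ndarray, lam: float = 0.5, iters: int = 50) -> np.ndarray:
    """P: (N, K) model probabilities; returns (N, K) soft pseudo-labels."""
    Q = P.copy()
    for _ in range(iters):
        marginal = Q.mean(axis=0)            # current cluster marginal
        Q = P * marginal[None, :] ** -lam    # downweight over-populated clusters
        Q /= Q.sum(axis=1, keepdims=True)    # renormalize per sample
    return Q

rng = np.random.default_rng(0)
P = rng.dirichlet(np.full(10, 0.5), size=256)
Q = pseudo_labels(P)
print("marginal entropy before/after:",
      -(P.mean(0) * np.log(P.mean(0))).sum(),
      -(Q.mean(0) * np.log(Q.mean(0))).sum())  # entropy rises: flatter marginal
```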
We present a new collection of news articles originating from fake and real news media sources for the analysis and prediction of news virality. Unlike existing fake news datasets, which contain claims or the headlines and bodies of news articles, every article in this collection is accompanied by its Facebook engagement count, which we consider an indicator of the article's virality. In addition, we provide the article description and the thumbnail image with which the article was shared on Facebook. These images were automatically annotated with object tags and color attributes. Using a cloud-based vision analysis tool, the thumbnail images were also analyzed for faces, and detected faces were annotated with facial attributes. We empirically study the use of this collection on the example task of article virality prediction.
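A hypothetical record schema for one article in this collection; all field names below are illustrative guesses at the described contents, not the dataset's actual column names.

```python
# Illustrative per-article record mirroring the fields described above.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ArticleRecord:
    headline: str
    body: str
    source_label: str             # "fake" or "real" media source
    facebook_engagements: int     # engagement count, the virality indicator
    description: str              # text shared alongside the article
    thumbnail_object_tags: List[str] = field(default_factory=list)
    thumbnail_colors: List[str] = field(default_factory=list)
    face_attributes: List[Dict] = field(default_factory=list)  # per detected face

example = ArticleRecord("...", "...", "real", 12345, "...")  # dummy values
```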
Image restoration is an important and challenging task in computer vision. Restoring a filtered image to its original version benefits various computer vision tasks. We adopt the Nonlinear Activation Free Network (NAFNet) as a fast and lightweight baseline and add a color-attention module that extracts useful color information to improve accuracy. We propose an accurate, fast, and lightweight network with multi-scale and color attention for Instagram filter removal (CAIR). Experimental results show that the proposed CAIR outperforms existing Instagram filter removal networks while remaining fast and lightweight: it is about 11× faster and 2.4× lighter, and exceeds them by 3.69 dB in PSNR on the IFFI dataset. CAIR successfully removes Instagram filters with high quality and restores color information, as shown in the qualitative results. The source code and pretrained weights are available at https://github.com/hnv-lab/cair.
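A minimal sketch of what a color-attention block could look like: global color statistics of the input image gate the feature channels, squeeze-and-excite style. CAIR's actual module is presumably more elaborate; this only illustrates the idea of injecting color information into the features.

```python
# Hypothetical color-attention block: global RGB statistics gate the channels.
import torch

class ColorAttention(torch.nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(3, channels), torch.nn.ReLU(),
            torch.nn.Linear(channels, channels), torch.nn.Sigmoid())

    def forward(self, feats: torch.Tensor, image: torch.Tensor) -> torch.Tensor:
        color = image.mean(dim=(2, 3))             # (B, 3) global color stats
        gate = self.mlp(color)[:, :, None, None]   # (B, C, 1, 1) channel gates
        return feats * gate                        # emphasize color-relevant channels

feats = torch.randn(2, 16, 32, 32)   # intermediate features
image = torch.rand(2, 3, 128, 128)   # filtered input image
out = ColorAttention(16)(feats, image)
```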
Diffusion models have shown impressive image generation performance and have been used in various computer vision tasks. Unfortunately, image generation with diffusion models is very time-consuming, since it requires thousands of sampling steps. To address this problem, we present a novel pyramidal diffusion model that generates high-resolution images starting from coarser-resolution images, using a single score function trained with a positional embedding. This enables time-efficient sampling for image generation and also alleviates the small-batch-size problem when training with limited resources. Furthermore, we show that this single score function can be used effectively for multi-scale super-resolution problems.
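A hedged sketch of the coarse-to-fine sampling loop with one shared score function: denoise at a coarse resolution, upsample, and continue refining at the next scale. The score network, schedule, and Langevin-style update below are toy placeholders, and the paper's resolution-aware positional embedding is reduced to a bare scale index.

```python
# Toy coarse-to-fine sampling with a single shared score function; the actual
# model, noise schedule, and positional embedding are placeholders here.
import torch
import torch.nn.functional as F

def score_net(x, t, scale):      # placeholder: a trained network goes here
    return -x                    # toy score of a standard Gaussian

def sample(scales=(16, 32, 64), steps=20, step_size=0.05):
    x = torch.randn(1, 3, scales[0], scales[0])        # start coarse
    for s_idx, res in enumerate(scales):
        x = F.interpolate(x, size=(res, res),          # upsample to next scale
                          mode="bilinear", align_corners=False)
        for t in reversed(range(steps)):               # crude Langevin refinement
            noise = torch.randn_like(x) * (step_size * (t + 1) / steps) ** 0.5
            x = x + step_size * score_net(x, t, s_idx) + noise
    return x

img = sample()
print(img.shape)  # torch.Size([1, 3, 64, 64])
```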